With the expansion of content across various media and communication platforms, and users' growing access to these services, checking shared content has become increasingly important. It is particularly significant in cultural and social contexts, where high-quality data must be provided for people working in these fields. Offensive-content detection is an important area of web research, applied to textual content on subjects such as children's material and cultural, academic, and other topics. A preprocessed dataset is learned by machine learning methods (SVM, Naïve Bayes, and KNN), and the final model estimates the likelihood that an input text is offensive. The data we use is a collection of queries submitted to a Persian search engine. To enlarge the dataset, these queries were re-searched on Google and the first page of results was added to the dataset; each entry was then labeled as offensive or not. The selected model is trained on this data, and the trained model can then estimate the likelihood that input text is offensive. The results show that the precision of the Naïve Bayes, SVM, and KNN models reaches 94.05%, 97.28%, and 86.48%, respectively.
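
As a rough illustration of the kind of pipeline described above, the following minimal sketch trains the three named classifiers on TF-IDF features and compares their precision using scikit-learn. The toy texts and labels are hypothetical stand-ins for the labeled Persian query dataset, and the feature extraction and hyperparameters are assumptions, not the paper's exact settings:

```python
# Illustrative sketch only: the paper's actual preprocessing, features,
# and hyperparameters are not specified here. Toy data is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

# Hypothetical stand-in for the labeled query/result dataset.
texts = [
    "family friendly cartoon episodes",
    "educational science videos for kids",
    "insulting slur filled comment",
    "explicit abusive rant transcript",
    "academic paper on cultural heritage",
    "vulgar harassing message thread",
    "children story books online",
    "hateful degrading forum post",
]
labels = [0, 0, 1, 1, 0, 1, 0, 1]  # 1 = offensive, 0 = clean

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=42, stratify=labels
)

# TF-IDF features; character n-grams may suit Persian text better in practice.
vectorizer = TfidfVectorizer()
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

# The three model families named in the abstract, with default-ish settings.
models = {
    "Naive Bayes": MultinomialNB(),
    "SVM": LinearSVC(),
    "KNN": KNeighborsClassifier(n_neighbors=3),
}
for name, model in models.items():
    model.fit(X_train_vec, y_train)
    preds = model.predict(X_test_vec)
    print(name, "precision:", precision_score(y_test, preds, zero_division=0))
```

On a real dataset of this kind, the same loop would report per-model precision, which is the comparison the abstract summarizes.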